Current Issue: April - June | Volume: 2019 | Issue Number: 2 | Articles: 5
Estimation of human emotions plays an important role in the development of modern brain-computer interface devices like the Emotiv EPOC+ headset. In this paper, we present an experiment to assess the classification accuracy of the emotional states provided by the headset's application programming interface (API). In this experiment, several sets of images selected from the International Affective Picture System (IAPS) dataset are shown to sixteen participants wearing the headset. First, the participants' responses to the elicited emotions, collected with a self-assessment manikin questionnaire, are compared with the validated IAPS predefined valence, arousal and dominance values. After statistically demonstrating that the responses are highly correlated with the IAPS values, several artificial neural networks (ANNs) based on the multilayer perceptron architecture are tested to calculate the classification accuracy of the Emotiv EPOC+ API emotional outcomes. The best result is obtained for an ANN configuration with three hidden layers of 30, 8 and 3 neurons for layers 1, 2 and 3, respectively. This configuration achieves 85% classification accuracy, which means that the emotional estimation provided by the headset can be used with high confidence in real-time applications based on users' emotional states. Thus, the emotional states given by the headset's API may be used with no further processing of the electroencephalogram signals acquired from the scalp, which would add a level of difficulty...
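The abstract names only the hidden-layer configuration (30, 8 and 3 neurons); the input and output dimensions below are assumptions for illustration. A minimal bias-free sketch of a forward pass through such a multilayer perceptron, in pure Python:

```python
import random
import math

# Hidden layers of 30, 8 and 3 neurons match the abstract's best
# configuration. The 14-dimensional input (one value per Emotiv EPOC+
# channel) and 4 output classes are assumptions for this sketch.
LAYER_SIZES = [14, 30, 8, 3, 4]

def init_weights(sizes, seed=0):
    """One weight matrix per consecutive layer pair (no bias terms)."""
    rng = random.Random(seed)
    return [[[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
             for _ in range(n_out)]
            for n_in, n_out in zip(sizes, sizes[1:])]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(weights, x):
    """Propagate one feature vector through every layer in turn."""
    for layer in weights:
        x = [sigmoid(sum(w * xi for w, xi in zip(neuron, x)))
             for neuron in layer]
    return x

weights = init_weights(LAYER_SIZES)
out = forward(weights, [0.1] * 14)  # one per-class activation each
```

In practice a trained network of this size would come from a standard library rather than hand-rolled code; the sketch only shows how the 30-8-3 hidden structure shapes the computation.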
The complexity of today's car manufacturing processes increases constantly due to the growing number of electronic and digital features in cars as well as the shorter life cycle of car designs, which raises the need for faster adaptation to new car models. However, the ongoing digitalization of production and working contexts offers the chance to support production workers with digital information as well as innovative, interactive, digital devices. Therefore, in this work we investigate a representative production step in a long-term project together with a German car manufacturer, structured into three phases. In the first phase, we investigated the working process empirically and developed a comprehensive and innovative user interface design that addresses various types of interactive devices. Building on this, we developed the device score model, which is designed to evaluate interactive systems and user interfaces in a production context with respect to ergonomics, UI design, performance, technology acceptance, and user experience. This work was conducted in the second phase of the project, in which we used the model to investigate the subjective suitability of six innovative device setups that implement the user interface design developed in phase one, in an experimental setup with 67 participants at two locations in southern Germany. The major result showed that the new user interface design running on a smartphone is the most suitable setup for future interactive systems in car manufacturing. In the third and final phase, we investigated the suitability of the two best-rated devices for long-term use, with two workers using the system during a full shift. These two systems were compared with the standard system in use. The major conclusion is that smartphones as well as AR glasses show very high potential to increase performance in production if used in a well-designed fashion....
We consider the essence of human intelligence to be the ability to mentally (internally) construct a world in the form of stories through interactions with external environments. Understanding the principles of this mechanism is vital for realizing a humanlike and autonomous artificial intelligence, but extremely complex problems are involved. From this perspective, we propose a conceptual-level theory for the computational modeling of generative narrative cognition. Our basic idea can be described as follows: stories are representational elements forming an agent's mental world and are also living objects that have the power of self-organization. In this study, we develop this idea by discussing the complexities of the internal structure of a story and the organizational structure of a mental world. In particular, we classify the principles of the self-organization of a mental world into five types of generative actions, i.e., connective, hierarchical, contextual, gathering, and adaptive. An integrative cognition is explained with these generative actions in the form of a distributed multiagent system of stories....
This paper compares traditional machine learning models, i.e., Support Vector Machine, k-Nearest Neighbors, Decision Tree and Random Forest, with a Feedforward Neural Network and Long Short-Term Memory. We observe that the two neural networks achieve higher accuracies than the traditional models. This paper also examines whether dropout can improve the accuracy of neural networks. We observe that for the Feedforward Neural Network, applying dropout can lead to better performance in certain cases but worse performance in others. The influence of dropout on LSTM models is small. Therefore, using dropout does not guarantee higher accuracy....
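The abstract does not specify how dropout is applied; a common formulation is inverted dropout, sketched below in pure Python (the layer width and drop probability are illustrative values, not taken from the paper):

```python
import random

def dropout(activations, p, rng, training=True):
    """Inverted dropout: during training, zero each unit with
    probability p and rescale survivors by 1/(1-p), so the expected
    activation is unchanged and inference needs no adjustment."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0
            for a in activations]

rng = random.Random(42)
acts = [1.0] * 1000          # a layer of 1000 active units
dropped = dropout(acts, p=0.5, rng=rng)          # training pass
kept = dropout(acts, p=0.5, rng=rng, training=False)  # inference pass
```

Because surviving units are rescaled at training time, the same forward pass serves for inference with no extra bookkeeping, which is why this variant is the common default in practice.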
Symbolic gestures are hand postures with conventionalized meanings. They are static gestures that one can perform without using voice, even in very complex environments containing variations in rotation and scale. The gestures may be produced under different illumination conditions or against occluding backgrounds. Any hand gesture recognition system should find enough discriminative features, such as hand-finger contextual information. However, in existing approaches, the depth information of hand fingers that represents finger shapes is utilized only in a limited capacity to extract discriminative finger features. Nevertheless, if we consider finger bending information (i.e., a finger that overlaps the palm) extracted from the depth map and use it as local features, static gestures varying ever so slightly become distinguishable. Our work corroborates this idea: we generated depth silhouettes with variation in contrast to achieve more discriminative keypoints. This approach, in turn, improved the recognition accuracy up to 96.84%. We applied the Scale-Invariant Feature Transform (SIFT) algorithm, which takes the generated depth silhouettes as input and produces robust feature descriptors as output. These features (after conversion into unified-dimensional feature vectors) are fed into a multiclass Support Vector Machine (SVM) classifier to measure the accuracy. We tested our results with a standard dataset containing 10 symbolic gestures representing 10 numeric symbols (0-9). After that, we verified and compared our results among depth images, binary images, and images consisting of the hand-finger edge information generated from the same dataset. Our results show higher accuracy when applying SIFT features to depth images. Recognizing numeric symbols accurately through hand gestures has a huge impact on different Human-Computer Interaction (HCI) applications including augmented reality, virtual reality, and other fields....
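The abstract mentions converting SIFT descriptors into "unified dimensional feature vectors" before the SVM, but does not say how; one standard way is a bag-of-features histogram over pre-computed visual-word centers. A pure-Python sketch under that assumption (the 2-D toy descriptors and centers below are illustrative, real SIFT descriptors are 128-dimensional):

```python
def nearest_center(desc, centers):
    """Index of the visual-word center closest to one descriptor
    (squared Euclidean distance)."""
    best, best_d = 0, float("inf")
    for i, c in enumerate(centers):
        d = sum((a - b) ** 2 for a, b in zip(desc, c))
        if d < best_d:
            best, best_d = i, d
    return best

def bag_of_features(descriptors, centers):
    """Map a variable-size set of descriptors to one fixed-length
    histogram over visual words, normalised to sum to 1, so images
    with different keypoint counts become comparable SVM inputs."""
    hist = [0.0] * len(centers)
    for d in descriptors:
        hist[nearest_center(d, centers)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# Toy example: two visual words, three keypoint descriptors.
centers = [[0.0, 0.0], [10.0, 10.0]]
descs = [[1.0, 1.0], [9.0, 9.0], [0.0, 1.0]]
hist = bag_of_features(descs, centers)
```

Whatever the encoding, the point is the same: every image yields a vector of identical length, which is what a multiclass SVM requires.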